Conversation
a07f42d to 79c3e64
ecPablo
left a comment
Approving the deployment/ folder related changes.
79c3e64 to e4e68f2
```go
lggr := logger.With(d.lggr, "observationTimestamp", opts.ObservationTimestamp(), "configDigest", opts.ConfigDigest(), "seqNr", opts.OutCtx().SeqNr)
// Observation loop logic
{
	// Update the list of streams to observe for this config digest and set the timeout
}
```

Suggested change:

```diff
- // Update the list of streams to observe for this config digest and set the timeout
+ // Update the list of streams to observe and set the timeout
```
```go
if deadline, ok := ctx.Deadline(); ok {
	osv.observationTimeout = time.Until(deadline)
}
```
This is a little odd. On line 215 we check whether the observation is still valid by comparing against 2x this value, which seems a little arbitrary. It would be fine and predictable if we compared against the initial value of 250ms, but this is a little strange.
We set a default value on L334 to ensure we don't exhaust resources if this somehow gets set to 0 in the on-chain config.
Here we extract the deadline (which is the max observation duration), and it defines both the pace and the timeout; hence the 2x for the validity of the TTL. This ensures both the cache and the pace are adjusted to the expected loop run-time, i.e. we have enough TTL and enough pace.




This change moves the LLO stream observations into an asynchronous continuous loop that provides pre-fetched stream values to the data source, decoupling the pipeline runs from the plugin's Observe call. This allows for better data freshness, round throughput, and resource consumption in the observation stage.